
    Building cost-benefit models of information interactions

    Modeling how people interact with search interfaces has been of particular interest and importance to the field of Interactive Information Retrieval. Recently, there has been a move towards developing formal models of the interaction between the user and the system, whether it be to: (i) run a simulation, (ii) conduct an economic analysis, (iii) measure system performance, or (iv) simply to better understand user interactions and hypothesise about user behaviours. Such models consider the costs and the benefits that arise through the interaction with the interface/system and the information surfaced during the course of interaction. In this half-day tutorial, we will focus on describing a series of cost-benefit models that have been proposed in the literature and how they have been applied in various scenarios. The tutorial will be structured into two parts. First, we will provide an overview of Decision Theory and Cost-Benefit Analysis techniques, and how they can be, and have been, applied to a variety of Interactive Information Retrieval scenarios. For example: when do facets help? Under what conditions are query suggestions useful? Is it better to bookmark or re-find? The second part of the tutorial will be dedicated to building cost-benefit models, where we will discuss different techniques to build and develop such models. In the practical session, we will also discuss how costs and benefits can be estimated, and how the models can help inform and guide experimentation. During the tutorial, participants will be challenged to build cost models for a number of problems (or even bring their own problems to solve).
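    As an illustration of the kind of cost-benefit reasoning the tutorial covers, the sketch below compares two interaction strategies (typing full queries vs. accepting query suggestions) by their expected time cost. All costs and action counts here are invented for the example; they are not values from the tutorial.

```python
# A minimal cost-benefit sketch, assuming illustrative per-action costs.

def expected_cost(cost_per_action: float, actions: float) -> float:
    """Total interaction cost: effort per action times expected number of actions."""
    return cost_per_action * actions

# Strategy A: type full queries yourself (slower per action, fewer rounds).
cost_typing = expected_cost(cost_per_action=5.0, actions=3.0)    # seconds
# Strategy B: accept query suggestions (cheaper per action, but possibly
# more rounds if suggestions are off-topic).
cost_suggest = expected_cost(cost_per_action=2.0, actions=5.0)   # seconds

better = "suggestions" if cost_suggest < cost_typing else "typing"
print(f"typing={cost_typing}s, suggestions={cost_suggest}s -> prefer {better}")
# -> typing=15.0s, suggestions=10.0s -> prefer suggestions
```

    In a fuller analysis, the expected number of actions would itself depend on the quality of the suggestions, which is exactly the kind of trade-off such models make explicit.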

    Time Pressure and System Delays in Information Search

    We report preliminary results of the impact of time pressure and system delays on search behavior from a laboratory study with forty-three participants. To induce time pressure, we randomly assigned half of our study participants to a treatment condition where they were only allowed five minutes to search for each of four ad-hoc search topics. The other half of the participants were given no task time limits. For half of participants’ search tasks (n=2), five-second delays were introduced after queries were submitted and SERP results were clicked. Results showed that participants in the time pressure condition queried at a significantly higher rate, viewed significantly fewer documents per query, had significantly shallower hover and view depths, and spent significantly less time examining documents and SERPs. We found few significant differences in search behavior for system delay or interaction effects between time pressure and system delay. These initial results show time pressure has a significant impact on search behavior and suggest the design of search interfaces and features that support people who are searching under time pressure.

    Report on the Information Retrieval Festival (IRFest2017)

    The Information Retrieval Festival took place in April 2017 in Glasgow. The focus of the workshop was to bring together IR researchers from the various Scottish universities and beyond in order to facilitate more awareness, increased interaction and reflection on the status of the field and its future. The program included an industry session, research talks, demos and posters, as well as two keynotes. The first keynote was delivered by Prof. Jaana Kekäläinen, who provided a historical, critical reflection on realism in Interactive Information Retrieval experimentation, while the second keynote was delivered by Prof. Maarten de Rijke, who argued for greater use of Artificial Intelligence in IR solutions and deployments. The workshop was followed by a "Tour de Scotland" where delegates were taken from Glasgow to Aberdeen for the European Conference on Information Retrieval (ECIR 2017).

    Personalised Search Time Prediction using Markov Chains

    For improving the effectiveness of Interactive Information Retrieval (IIR), a system should minimise the search time by guiding the user appropriately. As a prerequisite, in any search situation, the system must be able to estimate the time the user will need to find the next relevant document. In this paper, we show how Markov models derived from search logs can be used for predicting search times, and describe a method for evaluating these predictions. For personalising the predictions based upon a few observed user events, we devise appropriate parameter estimation methods. Our experimental results show that by observing users for only 100 seconds, the personalised predictions are already significantly better than global predictions.
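    As a rough illustration of the underlying idea (not the paper's actual model or parameters), search time prediction with an absorbing Markov chain can be sketched as follows: the expected number of visits to each transient interaction state comes from the fundamental matrix N = (I − Q)⁻¹, and the expected search time weights those visit counts by average per-state dwell times. The states, transition probabilities, and dwell times below are invented; the paper estimates such quantities from search logs.

```python
import numpy as np

# Transient states: 0 = querying, 1 = scanning SERP, 2 = reading a document.
# The (implicit) absorbing state is "found a relevant document".
Q = np.array([            # transitions among transient states only
    [0.0, 0.9, 0.0],      # each row sums to < 1; the remainder is
    [0.3, 0.0, 0.6],      # the probability of absorption from that state
    [0.2, 0.4, 0.0],
])
dwell = np.array([8.0, 12.0, 30.0])   # illustrative seconds per visit

# Fundamental matrix: N[i, j] = expected visits to state j starting from i.
N = np.linalg.inv(np.eye(3) - Q)
expected_time = N @ dwell             # expected seconds until absorption
print(expected_time)                  # one prediction per starting state
```

    Personalisation, in this framing, would amount to re-estimating Q and the dwell times from the first few observed events of an individual user.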

    CLEF 2017 dynamic search lab overview and evaluation

    In this paper we provide an overview of the first edition of the CLEF Dynamic Search Lab. The CLEF Dynamic Search Lab ran in the form of a workshop with the goal of approaching one key question: how can we evaluate dynamic search algorithms? Unlike static search algorithms, which essentially consider user requests independently and do not adapt the ranking with respect to the user's sequence of interactions, dynamic search algorithms try to infer the user's intentions from their interactions and then adapt the ranking accordingly. Personalized session search, contextual search, and dialogue systems often adopt such algorithms. This lab provides an opportunity for researchers to discuss the challenges faced when trying to measure and evaluate the performance of dynamic search algorithms, given the context of available corpora, simulation methods, and current evaluation metrics. To seed the discussion, a pilot task was run with the goal of producing search agents that could simulate the process of a user interacting with a search system over the course of a search session. Herein, we describe the overall objectives of the CLEF 2017 Dynamic Search Lab, the resources created for the pilot task, the evaluation methodology adopted, and some preliminary evaluation results of the pilot task.

    Advances in formal models of search and search behaviour

    Searching is performed in the context of a task, and as such the value of the information found is relative to that task. Recently, there has been a drive towards developing formal models of information seeking and retrieval that consider the costs and benefits arising through the interaction with the interface/system and the information surfaced during that interaction. In this full-day tutorial we will focus on describing and explaining some of the more recent formal models of Information Seeking and Retrieval. The tutorial is structured into two parts. In the first part we will present a series of models that have been developed based on: (i) economic theory, (ii) decision theory, (iii) game theory and (iv) optimal foraging theory. The second part of the day will be dedicated to building models, where we will discuss different techniques to build and develop models from which we can draw testable hypotheses. During the tutorial participants will be challenged to develop various formal models, applying the techniques learnt during the day. We will then conclude with presentations on solutions, followed by a summary and overview of challenges and future directions. This tutorial is aimed at participants wanting to know more about the various formal models of information seeking, search and retrieval that have been proposed. The tutorial will be presented at an intermediate level, and is designed to support participants who want to be able to understand and build such models.

    Report on the Second International Workshop on the Evaluation on Collaborative Information Seeking and Retrieval (ECol'2017 @ CHIIR)

    The 2nd workshop on the evaluation of collaborative information retrieval and seeking (ECol) was held in conjunction with the ACM SIGIR Conference on Human Information Interaction & Retrieval (CHIIR) in Oslo, Norway. The workshop focused on discussing the challenges and difficulties of researching and studying collaborative information retrieval and seeking (CIR/CIS). After an introductory, scene-setting overview of developments in CIR/CIS, participants were challenged with devising a range of possible CIR/CIS tasks that could be used for evaluation purposes. Through the brainstorming and discussions, valuable insights regarding the evaluation of CIR/CIS tasks became apparent: for particular tasks, efficiency and/or effectiveness are most important; however, for the majority of tasks, the success and quality of outcomes, along with knowledge sharing and sense-making, matter most, and these latter attributes are much more difficult to measure and evaluate. Thus the major challenge for CIR/CIS research is to develop methods, measures and methodologies to evaluate these higher-order attributes.

    SiS at CLEF 2017 eHealth tar task

    This paper presents the Strathclyde iSchool's (SiS) participation in the Technologically Assisted Reviews in Empirical Medicine task. For the ranking task, we explored two ways in which assistance could be provided to reviewers during the assessment process: (i) topic models, where we use Latent Dirichlet Allocation to identify topics within the set of retrieved documents, ranking documents by the topic most likely to be relevant; and (ii) relevance feedback, where we use Rocchio's algorithm to update the query model for subsequent rounds of interaction. A third approach combines topic models and relevance feedback to quickly identify the relevant abstracts. For the thresholding task, we apply a score threshold, excluding documents whose BM25 scores do not exceed the threshold.
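    The relevance-feedback component rests on the standard Rocchio update, which moves the query vector towards the centroid of judged-relevant documents and away from the centroid of judged non-relevant ones. The sketch below is a generic textbook formulation with illustrative alpha/beta/gamma weights and toy vectors; it is not the paper's exact configuration.

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: shift the query towards relevant document centroids,
    away from non-relevant ones; negative term weights are clipped to zero."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)

# Toy term-weight vectors over a 3-term vocabulary.
q0 = [1.0, 0.0, 0.5]
rel = np.array([[0.8, 0.4, 0.0], [0.6, 0.6, 0.2]])
nonrel = np.array([[0.0, 0.0, 1.0]])
q1 = rocchio(q0, rel, nonrel)
print(q1)  # -> [1.525 0.375 0.425]
```

    After each round of assessments, the updated query vector would be used to re-rank the remaining unjudged abstracts.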

    Cognitive biases in search: a review and reflection of cognitive biases in Information Retrieval

    People are susceptible to an array of cognitive biases, which can result in systematic errors and deviations from rational decision making. Over the past decade, an increasing amount of attention has been paid towards investigating how cognitive biases influence information seeking and retrieval behaviours and outcomes; in particular, how such biases may negatively affect decisions because, for example, searchers may seek confirmatory but incorrect information, or anchor on an initial search result even if it is incorrect. In this perspectives paper, we aim to: (1) bring together and catalogue the emerging work on cognitive biases in the field of Information Retrieval; and (2) provide a critical review and reflection on these studies and subsequent findings. During our analysis we report on over thirty studies that empirically examined cognitive biases in search, providing over forty key findings related to different domains (e.g. health, web, socio-political) and different parts of the search process (e.g. querying, assessing, judging, etc.). Our reflection highlights the importance of this research area, and critically discusses the limitations, difficulties and challenges of investigating this phenomenon, along with presenting open questions and future directions in researching the impact — both positive and negative — of cognitive biases in Information Retrieval.